Proof of concept; client/server architecture with shared cache #23
Conversation
it's a bit big....
Factored the client/server stuff out to https://github.com/UQ-PAC/aslp-rpc/tree/marshall-rpc-clientserver. It would be nice if opam locks or pin deps worked and we could transitively pin asli and the rpc library, but instead they both need to be manually pinned. We should make sure the nix build works on this branch before merging.
i'm sorry, can you bring the cache back please? not jane street though
Cache has been added back upstream: UQ-PAC/aslp-rpc#9
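To make the cache discussion concrete, here is a minimal sketch of what a shared lift cache can look like: a Hashtbl memoisation keyed on the opcode. The names (`cache`, `cached_lift`) and the string-valued semantics are illustrative assumptions, not the real aslp-rpc API.

```ocaml
(* Hypothetical sketch of an opcode -> semantics cache: memoise the
   expensive lift so each distinct opcode is evaluated at most once. *)
let cache : (string, string) Hashtbl.t = Hashtbl.create 1024

let cached_lift ~(lift : string -> string) (opcode : string) : string =
  match Hashtbl.find_opt cache opcode with
  | Some sem -> sem                    (* cache hit: skip evaluation *)
  | None ->
      let sem = lift opcode in         (* cache miss: evaluate once *)
      Hashtbl.add cache opcode sem;
      sem

let () =
  let calls = ref 0 in
  let lift op = incr calls; "sem:" ^ op in
  let a = cached_lift ~lift "d503201f" in
  let b = cached_lift ~lift "d503201f" in
  assert (a = b && !calls = 1);        (* second lookup never re-lifts *)
  print_endline "cache ok"
```

Because the server process is long-lived, a single table like this survives across client requests, which is what turns repeated instructions into cache hits.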
lgtm again.
it would be nice to update the readme with client/server usage, but this can be a follow up PR
I'll do that in the next PR and package it for the opam repo at the same time.
When lifting many examples (such as Basil SystemTests), the 2-second evaluation environment initialisation is a significant penalty. This PR aims to avoid it, and to allow sharing a cache between invocations. It should produce identical output.
gtirb_semantics --serve starts the server listening on a unix domain socket; clients connect to it unless given the --local flag. This means there is only one evaluation env, and it lives in the server. The server only uses Lwt, so there is no parallelism in lifting, but there is enough time saved elsewhere for this not to matter. E.g. running all the gcc system tests gives a 2x speedup:
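To illustrate the client/server split, here is a hedged one-shot sketch of a request over a unix domain socket using the stdlib Unix and Thread modules. The real server is Lwt-based; the socket path and the "opcode in, semantics out" wire format here are assumptions for illustration only.

```ocaml
(* Sketch: a server thread accepts one connection on a unix domain
   socket and replies with fake "semantics"; the client sends an
   opcode and reads the reply. All names are illustrative. *)
let sock_path =
  Filename.concat (Filename.get_temp_dir_name ()) "aslp-demo.sock"

let serve () =
  let srv = Unix.socket Unix.PF_UNIX Unix.SOCK_STREAM 0 in
  Unix.bind srv (Unix.ADDR_UNIX sock_path);
  Unix.listen srv 1;
  let (conn, _) = Unix.accept srv in
  let buf = Bytes.create 64 in
  let n = Unix.read conn buf 0 64 in
  (* the single long-lived server would hold the evaluation env;
     here we just tag the received opcode *)
  let reply = "sem:" ^ Bytes.sub_string buf 0 n in
  ignore (Unix.write_substring conn reply 0 (String.length reply));
  Unix.close conn;
  Unix.close srv

let request opcode =
  let c = Unix.socket Unix.PF_UNIX Unix.SOCK_STREAM 0 in
  Unix.connect c (Unix.ADDR_UNIX sock_path);
  ignore (Unix.write_substring c opcode 0 (String.length opcode));
  let buf = Bytes.create 64 in
  let n = Unix.read c buf 0 64 in
  Unix.close c;
  Bytes.sub_string buf 0 n

let () =
  if Sys.file_exists sock_path then Sys.remove sock_path;
  let t = Thread.create serve () in
  Thread.delay 0.1;                 (* crude wait for the listener *)
  let sem = request "d503201f" in
  Thread.join t;
  assert (sem = "sem:d503201f");
  print_endline sem
```

The key property this mirrors is that the expensive state lives only on the server side of the socket; each client does a cheap connect/write/read instead of re-initialising the evaluation env.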
compile & lift GCC SystemTests gtirb_semantics with env & cache initialized on each client:
make -j10 3145.71s user 457.15s system 991% cpu 6:03.26 total
compile & lift GCC SystemTests gtirb_semantics with client-server & shared cache:
make -j10 1457.22s user 316.56s system 983% cpu 3:00.40 total
Server:
Note that there were only 2312 cache misses (91% cache hit rate) due to most of these programs being common setup code emitted by gcc.
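As a back-of-envelope check of those numbers (not stated in the PR, derived from it): a 91% hit rate with 2312 misses implies roughly 25,700 lift requests in total, so the env was consulted for only a small fraction of instructions.

```ocaml
(* misses = total * (1 - hit_rate), so total = misses / (1 - hit_rate).
   Figures are the ones quoted in the PR description. *)
let misses = 2312.
let hit_rate = 0.91
let total = misses /. (1. -. hit_rate)
let () = Printf.printf "total lift requests ~= %.0f\n" total
```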
Lifting cntlm:
client:
server:
We can see the client spends all its time asleep waiting for the socket (0.6s of work out of the 58s of elapsed time).
Executing locally takes a similar amount of time: for large programs the initialisation cost doesn't play as much of a role compared to the IPC overhead: